    Neural Representations for Sensory-Motor Control I: Head-Centered 3-D Target Positions from Opponent Eye Commands

    This article describes how corollary discharges from outflow eye movement commands can be transformed by two stages of opponent neural processing into a head-centered representation of 3-D target position. This representation implicitly defines a cyclopean coordinate system whose variables approximate the binocular vergence and the spherical horizontal and vertical angles of the target with respect to the observer's head. Various psychophysical data concerning binocular distance perception and reaching behavior are clarified by this representation. The representation provides a foundation for learning head-centered and body-centered invariant representations of both foveated and non-foveated 3-D target positions. It also supports a solution of the classical motor equivalence problem, whereby many different joint configurations of a redundant manipulator can all be used to realize a desired trajectory in 3-D space.
    Air Force Office of Scientific Research (URI 90-0175); Defense Advanced Research Projects Agency (AFOSR-90-0083); National Science Foundation (IRI-87-16960, IRI-90-24877)
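
    The opponent-processing recipe this series describes (sums of opponent eye-command signals give the angular coordinates; differences give vergence) can be illustrated with a short sketch. This is not the authors' implementation: the sign conventions, the shared elevation angle, and the small-angle distance estimate are assumptions for illustration only.

        import numpy as np

        def cyclopean_coordinates(theta_l, theta_r, phi, interocular=0.065):
            """Toy mapping from left/right eye horizontal rotation angles
            (radians) and a shared elevation angle phi into cyclopean
            (azimuth, elevation, vergence) coordinates."""
            azimuth = 0.5 * (theta_l + theta_r)   # version: mean gaze direction
            vergence = theta_l - theta_r          # convergence angle on the target
            elevation = phi                       # eyes assumed yoked vertically
            # small-angle triangulation gives radial distance from the egocenter
            distance = interocular / max(vergence, 1e-6)
            return azimuth, elevation, vergence, distance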

    Neural Representations for Sensory-Motor Control, II: Learning a Head-Centered Visuomotor Representation of 3-D Target Position

    A neural network model is described for how an invariant head-centered representation of 3-D target position can be autonomously learned by the brain in real time. Once learned, such a target representation may be used to control both eye and limb movements. The target representation is derived from the positions of both eyes in the head, and the locations which the target activates on the retinas of both eyes. A Vector Associative Map, or VAM, learns the many-to-one transformation from multiple combinations of eye and retinal position to invariant 3-D target position. Eye position is derived from outflow movement signals to the eye muscles. Two successive stages of opponent processing convert these corollary discharges into a head-centered representation that closely approximates the azimuth, elevation, and vergence of the eyes' gaze position with respect to a cyclopean origin located between the eyes. VAM learning combines this cyclopean representation of present gaze position with binocular retinal information about target position into an invariant representation of 3-D target position with respect to the head. VAM learning can use a teaching vector that is externally derived from the positions of the eyes when they foveate the target. A VAM can also autonomously discover and learn the invariant representation, without an explicit teacher, by generating internal error signals from environmental fluctuations in which these invariant properties are implicit. VAM error signals are computed by Difference Vectors, or DVs, that are zeroed by the VAM learning process. VAMs may be organized into VAM Cascades for learning and performing both sensory-to-spatial maps and spatial-to-motor maps. These multiple uses clarify why DV-type properties are computed by cells in the parietal, frontal, and motor cortices of many mammals. VAMs are modulated by gating signals that express different aspects of the will-to-act. These signals transform a single invariant representation into movements of different speed (GO signal) and size (GRO signal), and thereby enable VAM controllers to match a planned action sequence to variable environmental conditions.
    National Science Foundation (IRI-87-16960, IRI-90-24877); Office of Naval Research (N00014-92-J-1309)
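
    A minimal sketch of the VAM idea described above: a linear map from the eye-plus-retina input to a 3-D target estimate, trained by zeroing the Difference Vector, with multiplicative GO and GRO gates on the resulting movement command. The linear form, the learning rate, and the way the gates enter are illustrative assumptions, not the paper's equations.

        import numpy as np

        def vam_step(W, x, target, lr=0.1, go=1.0, gro=1.0):
            """One illustrative VAM update (assumed arithmetic): W maps the
            input vector x to a 3-D target estimate, and learning drives
            the Difference Vector (DV) toward zero."""
            estimate = W @ x
            dv = target - estimate            # Difference Vector (error signal)
            W += lr * np.outer(dv, x)         # outer-product rule zeroes the DV
            command = go * gro * dv           # GO scales speed, GRO scales size
            return W, dv, command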

    A Self-Organizing Neural Network for Learning a Body-Centered Invariant Representation of 3-D Target Position

    This paper describes a self-organizing neural network that rapidly learns a body-centered representation of 3-D target positions. This representation remains invariant under head and eye movements, and is a key component of sensory-motor systems for producing motor equivalent reaches to targets (Bullock, Grossberg, and Guenther, 1993).
    National Science Foundation (IRI-87-16960, IRI-90-24877); Air Force Office of Scientific Research (F49620-92-J-0499)

    Neural Representations for Sensory-Motor Control, III: Learning a Body-Centered Representation of 3-D Target Position

    A neural model is described of how the brain may autonomously learn a body-centered representation of 3-D target position by combining information about retinal target position, eye position, and head position in real time. Such a body-centered spatial representation enables accurate movement commands to the limbs to be generated despite changes in the spatial relationships between the eyes, head, body, and limbs through time. The model learns a vector representation (otherwise known as a parcellated distributed representation) of target vergence with respect to the two eyes, and of the horizontal and vertical spherical angles of the target with respect to a cyclopean egocenter. Such a vergence-spherical representation has been reported in the caudal midbrain and medulla of the frog, as well as in psychophysical movement studies in humans. A head-centered vergence-spherical representation of foveated target position can be generated by two stages of opponent processing that combine corollary discharges of outflow movement signals to the two eyes. Sums and differences of opponent signals define angular and vergence coordinates, respectively. The head-centered representation interacts with a binocular visual representation of non-foveated target position to learn a visuomotor representation of both foveated and non-foveated target position that is capable of commanding yoked eye movements. This head-centered vector representation also interacts with representations of neck movement commands to learn a body-centered estimate of target position that is capable of commanding coordinated arm movements. Learning occurs during head movements made while gaze remains fixed on a foveated target. An initial estimate is stored, and a VOR-mediated gating signal prevents the stored estimate from being reset during a gaze-maintaining head movement. As the head moves, new estimates are compared with the stored estimate to compute difference vectors, which act as error signals that drive the learning process as well as control the on-line merging of multimodal information.
    Air Force Office of Scientific Research (F49620-92-J-0499); National Science Foundation (IRI-87-16960, IRI-90-24877); Office of Naval Research (N00014-92-J-1309)
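
    The learning step described above can be sketched briefly: during a gaze-maintaining head movement the body-centered location of the target does not change, so the stored pre-movement estimate serves as an internal teacher. The linear neck-signal mapping and the update rule below are assumptions for illustration, not the model's actual circuit.

        import numpy as np

        def body_centered_update(W, neck, head_centered, stored, lr=0.05):
            """Illustrative update: W maps neck (head position) signals to a
            corrective vector added to the head-centered target vector; the
            difference vector against the stored estimate drives learning."""
            estimate = head_centered + W @ neck   # current body-centered estimate
            dv = stored - estimate                # difference vector (error)
            W += lr * np.outer(dv, neck)          # drive the DV toward zero
            return W, dv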

    PSACNN: Pulse Sequence Adaptive Fast Whole Brain Segmentation

    With the advent of convolutional neural networks (CNNs), supervised learning methods are increasingly being used for whole brain segmentation. However, the large, manually annotated training dataset of labeled brain images required to train such supervised methods is frequently difficult to obtain or create. In addition, existing training datasets are generally acquired with a homogeneous magnetic resonance imaging (MRI) acquisition protocol. CNNs trained on such datasets are unable to generalize to test data with different acquisition protocols. Modern neuroimaging studies and clinical trials are necessarily multi-center initiatives with a wide variety of acquisition protocols. Despite stringent protocol harmonization practices, it is very difficult to standardize the gamut of MRI imaging parameters across scanners, field strengths, receive coils, etc., that affect image contrast. In this paper we propose a CNN-based segmentation algorithm that, in addition to being highly accurate and fast, is also resilient to variation in the input acquisition. Our approach relies on building approximate forward models of the pulse sequences that produce a typical test image. For a given pulse sequence, we use its forward model to generate plausible, synthetic training examples that appear as if they were acquired in a scanner with that pulse sequence. Sampling over a wide variety of pulse sequences results in a wide variety of augmented training examples that help build an image-contrast-invariant model. Our method trains a single CNN that can segment input MRI images with acquisition parameters as disparate as T1-weighted and T2-weighted contrasts using only T1-weighted training data. The segmentations generated are highly accurate, with state-of-the-art results (overall Dice overlap = 0.94), a fast run time (approximately 45 seconds), and consistency across a wide range of acquisition protocols.
    Comment: Typo in author name corrected. Greves -> Greve
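
    To make the forward-model idea concrete, here is a toy version: a closed-form spin-echo-style signal equation turns per-voxel tissue parameter maps (PD, T1, T2) into synthetic images for randomly sampled pulse-sequence parameters. PSACNN estimates its approximate forward models from test images rather than assuming this equation, and the parameter ranges below are also assumptions.

        import numpy as np

        def synthesize_image(pd, t1, t2, tr, te):
            """Toy spin-echo signal equation, S = PD*(1 - exp(-TR/T1))*exp(-TE/T2),
            as a stand-in for the paper's learned approximate forward models."""
            return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

        def sample_augmented_batch(pd, t1, t2, n=8, seed=0):
            # sample pulse-sequence parameters over a wide range so the
            # segmentation CNN sees many plausible contrasts at train time
            rng = np.random.default_rng(seed)
            trs = rng.uniform(0.3, 6.0, n)      # TR in seconds (assumed range)
            tes = rng.uniform(0.005, 0.12, n)   # TE in seconds (assumed range)
            return np.stack([synthesize_image(pd, t1, t2, tr, te)
                             for tr, te in zip(trs, tes)])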

    Origins of the extragalactic background at 1mm from a combined analysis of the AzTEC and MAMBO data in GOODS-N

    We present a study of the cosmic infrared background, which is a measure of the dust-obscured activity in all galaxies in the Universe. We venture to isolate the galaxies responsible for the background at 1 mm; with spectroscopic and photometric redshifts we constrain the redshift distribution of these galaxies. We create a deep 1.16 mm map (σ ≈ 0.5 mJy) by combining the AzTEC 1.1 mm and MAMBO 1.2 mm datasets in GOODS-N. This combined map contains 41 secure detections, 13 of which are new. By averaging the 1.16 mm flux densities of individually undetected galaxies with 24 μm flux densities > 25 μJy, we resolve 31-45 per cent of the 1.16 mm background. Repeating our analysis on the SCUBA 850 μm map, we resolve a higher percentage (40-64 per cent) of the 850 μm background. A majority of the background resolved (attributed to individual galaxies) at both wavelengths comes from galaxies at z > 1.3. If the ratio of the resolved submillimeter to millimeter background is applied to a reasonable scenario for the origins of the unresolved submillimeter background, 60-88 per cent of the total 1.16 mm background comes from galaxies at z > 1.3.
    Comment: 12 pages, 10 figures. Accepted by MNRAS. The combined map is publicly available at http://www.astro.umass.edu/~pope/goodsn_mm
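
    Both steps described above (combining the two maps and stacking at 24 μm positions) have simple textbook forms, sketched below. This is a generic inverse-variance combination and mean stack, not the paper's full pipeline, which also handles the different beams and effective wavelengths of the two instruments.

        import numpy as np

        def combine_maps(map1, sig1, map2, sig2):
            """Inverse-variance weighted combination of two overlapping maps,
            e.g. AzTEC 1.1 mm and MAMBO 1.2 mm, into a single deeper map."""
            w1, w2 = 1.0 / sig1**2, 1.0 / sig2**2
            combined = (w1 * map1 + w2 * map2) / (w1 + w2)
            sigma = np.sqrt(1.0 / (w1 + w2))
            return combined, sigma

        def stacked_flux(mm_map, positions):
            # average the map at the pixel positions of 24 um selected
            # galaxies to recover flux from individually undetected sources
            return np.mean([mm_map[y, x] for x, y in positions])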

    Learning the Effect of Registration Hyperparameters with HyperMorph

    We introduce HyperMorph, a framework that facilitates efficient hyperparameter tuning in learning-based deformable image registration. Classical registration algorithms perform an iterative pair-wise optimization to compute a deformation field that aligns two images. Recent learning-based approaches leverage large image datasets to learn a function that rapidly estimates a deformation for a given image pair. In both strategies, the accuracy of the resulting spatial correspondences is strongly influenced by the choice of certain hyperparameter values. However, an effective hyperparameter search consumes substantial time and human effort, as it often involves training multiple models for different fixed hyperparameter values, and may lead to suboptimal registration. We propose an amortized hyperparameter learning strategy to alleviate this burden by learning the impact of hyperparameters on deformation fields. We design a meta network, or hypernetwork, that predicts the parameters of a registration network for input hyperparameters, yielding a single model that generates the optimal deformation field corresponding to given hyperparameter values. This strategy enables fast, high-resolution hyperparameter search at test time, reducing the inefficiency of traditional approaches while increasing flexibility. We also demonstrate additional benefits of HyperMorph, including enhanced robustness to model initialization and the ability to rapidly identify optimal hyperparameter values specific to a dataset, image contrast, task, or even anatomical region, all without the need to retrain models. We make our code publicly available at http://hypermorph.voxelmorph.net.
    Comment: Accepted for publication at the Journal of Machine Learning for Biomedical Imaging (MELBA) at https://www.melba-journal.or
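
    The hypernetwork idea can be sketched compactly: a small MLP maps a hyperparameter value to the weights of a registration layer, so a single trained model covers a continuum of hyperparameter settings. The real HyperMorph predicts all the weights of a full registration U-Net; the single conv layer and the sizes below are illustrative assumptions (PyTorch).

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class HyperLayer(nn.Module):
            """Toy hypernetwork: predict one conv layer's weights from a
            scalar hyperparameter lam (e.g. a regularization weight)."""
            def __init__(self, in_ch=2, out_ch=16, k=3):
                super().__init__()
                self.shape = (out_ch, in_ch, k, k)
                n_params = out_ch * in_ch * k * k + out_ch
                self.mlp = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                                         nn.Linear(64, n_params))
                self.out_ch = out_ch

            def forward(self, lam, moving_fixed):
                # moving_fixed: (B, 2, H, W) stack of the image pair
                theta = self.mlp(lam.view(1, 1))[0]
                w = theta[:-self.out_ch].view(self.shape)
                b = theta[-self.out_ch:]
                # the predicted weights parameterize the registration layer,
                # so sweeping lam at test time needs no retraining
                return F.conv2d(moving_fixed, w, b, padding=1)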

    Asymmetric projections of the arcuate fasciculus to the temporal cortex underlie lateralized language function in the human brain

    The arcuate fasciculus (AF) in the human brain has asymmetric structural properties. However, the topographic organization of the asymmetric AF projections to the cortex and its relevance to cortical function remain unclear. Here we mapped the posterior projections of the human AF in the inferior parietal and lateral temporal cortices using surface-based structural connectivity analysis based on diffusion MRI and investigated their hemispheric differences. We then performed a cross-modal comparison with functional connectivity based on resting-state functional MRI (fMRI) and with task-related cortical activation based on fMRI using a semantic classification task of single words. Structural connectivity analysis showed that the left AF connecting to Broca's area predominantly projected to the lateral temporal cortex, extending from the posterior superior temporal gyrus to the mid part of the superior temporal sulcus and the middle temporal gyrus, whereas the right AF connecting to the right homolog of Broca's area predominantly projected to the inferior parietal cortex, extending from the mid part of the supramarginal gyrus to the anterior part of the angular gyrus. The left-lateralized projection regions of the AF in the left temporal cortex had asymmetric functional connectivity with Broca's area, indicating structure-function concordance through the AF. During the language task, left-lateralized cortical activation was observed. Among these regions, the brain responses in the temporal cortex and Broca's area that were connected through the left-lateralized AF pathway were specifically correlated across subjects. These results suggest that the human left AF, which structurally and functionally connects the mid temporal cortex and Broca's area in an asymmetric fashion, coordinates the cortical activity in these remote cortices during a semantic decision task. The unique feature of the left AF is discussed in the context of the human capacity for language.
    National Institutes of Health (U.S.) (Grants R01NS069696, P41EB015896, S10ODRR031599, S10RR021110); National Science Foundation (U.S.) (Grant NSF-DMS-1042134); Uehara Memorial Foundation (Fellowship); Society of Nuclear Medicine and Molecular Imaging (Wagner-Torizuka Fellowship); United States Dept. of Energy (Grant DE-SC0008430)